57 research outputs found
Source Polarization
The notion of source polarization is introduced and investigated. This
complements the earlier work on channel polarization. An application to
Slepian-Wolf coding is also considered. The paper is restricted to the case of
binary alphabets; extension of the results to non-binary alphabets is discussed
briefly.
Comment: To be presented at the 2010 IEEE International Symposium on Information Theory
Channel combining and splitting for cutoff rate improvement
The cutoff rate R_0(W) of a discrete memoryless channel (DMC) W is often
used as a figure of merit, alongside the channel capacity C(W). Given a
channel W consisting of two possibly correlated subchannels W_1, W_2, the
capacity function always satisfies C(W_1) + C(W_2) >= C(W), while there are
examples for which R_0(W_1) + R_0(W_2) > R_0(W). This fact that cutoff rate can
be "created" by channel splitting was noticed by Massey in his study of an
optical modulation system modeled as an M-ary erasure channel. This paper
demonstrates that similar gains in cutoff rate can be achieved for general
DMCs by methods of channel combining and splitting. The relation of the proposed
method to Pinsker's early work on cutoff rate improvement and to Imai-Hirakawa
multi-level coding is also discussed.
Comment: 5 pages, 7 figures, 2005 IEEE International Symposium on Information Theory, Adelaide, Sept. 4-9, 2005
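Massey's erasure-channel example mentioned in the abstract can be checked numerically. The sketch below is my own illustration (not code from the paper): a 4-ary erasure channel is viewed as two binary erasure subchannels whose erasures are fully correlated. The closed-form R_0 comes from the standard cutoff-rate formula with uniform inputs, which are optimal here by symmetry; the capacities of the split and combined views agree exactly, while the split cutoff rates exceed the combined one.

```python
import math

def r0_mary_erasure(M, eps):
    # Cutoff rate of an M-ary erasure channel with erasure probability eps,
    # under uniform inputs: R0 = -log2( (1 - eps)/M + eps )
    return -math.log2((1 - eps) / M + eps)

def capacity_mary_erasure(M, eps):
    # Capacity of the M-ary erasure channel: (1 - eps) * log2(M)
    return (1 - eps) * math.log2(M)

eps = 0.5
# The 4-ary erasure channel treated as a single channel ...
combined_R0 = r0_mary_erasure(4, eps)
# ... versus the same channel split into two binary erasure subchannels
split_R0 = 2 * r0_mary_erasure(2, eps)

print("R0 combined:", combined_R0, " R0 split:", split_R0)
print("C combined:", capacity_mary_erasure(4, eps),
      " C split:", 2 * capacity_mary_erasure(2, eps))
```

With eps = 0.5 the split cutoff rate is about 0.83 bits versus 0.68 bits combined, while both capacities equal 1 bit: cutoff rate is "created" by splitting even though capacity is unchanged.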
Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels
A method is proposed, called channel polarization, to construct code
sequences that achieve the symmetric capacity I(W) of any given binary-input
discrete memoryless channel (B-DMC) W. The symmetric capacity is the highest
rate achievable subject to using the input letters of the channel with equal
probability. Channel polarization refers to the fact that it is possible to
synthesize, out of N independent copies of a given B-DMC W, a second set of
N binary-input channels {W_N^(i) : 1 <= i <= N} such that, as N becomes
large, the fraction of indices i for which I(W_N^(i)) is near 1
approaches I(W) and the fraction for which I(W_N^(i)) is near 0
approaches 1 - I(W). The polarized channels {W_N^(i)} are
well-conditioned for channel coding: one need only send data at rate 1 through
those with capacity near 1 and at rate 0 through the remaining. Codes
constructed on the basis of this idea are called polar codes. The paper proves
that, given any B-DMC W with I(W) > 0 and any target rate R < I(W), there
exists a sequence of polar codes {C_n : n >= 1} such that C_n
has block-length N = 2^n, rate >= R, and probability of
block error under successive cancellation decoding bounded as
P_e(N, R) <= O(N^{-1/4}) independently of the code rate. This performance is
achievable by encoders and decoders with complexity O(N log N) for each.
Comment: The version which appears in the IEEE Transactions on Information Theory, July 2009
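For the binary erasure channel the polarization described above is easy to watch directly: the Bhattacharyya parameter Z of a synthesized channel equals its erasure probability, and the single-step transform maps Z to 2Z - Z^2 (the worse channel) and Z^2 (the better channel), with subchannel capacity I = 1 - Z. The sketch below (my own illustration; thresholds and parameters are arbitrary) counts how many of the 2^n synthesized channels are close to perfect or close to useless.

```python
def polarize_bec(eps, n):
    """Erasure probabilities of the 2^n channels synthesized from BEC(eps)."""
    zs = [eps]
    for _ in range(n):
        # Polar transform for the BEC: Z- = 2Z - Z^2, Z+ = Z^2
        zs = [w for z in zs for w in (2 * z - z * z, z * z)]
    return zs

eps, n, delta = 0.5, 16, 0.01
zs = polarize_bec(eps, n)
good = sum(z < delta for z in zs) / len(zs)      # capacity near 1
bad = sum(z > 1 - delta for z in zs) / len(zs)   # capacity near 0
print(f"fraction with capacity near 1: {good:.3f}  (-> I(W) = {1 - eps})")
print(f"fraction with capacity near 0: {bad:.3f}  (-> 1 - I(W) = {eps})")
```

At n = 16 the two fractions are already close to I(W) = 0.5 and 1 - I(W) = 0.5, with a shrinking middle ground of unpolarized channels; convergence is gradual, as the next abstract quantifies.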
On the Rate of Channel Polarization
It is shown that for any binary-input discrete memoryless channel W with
symmetric capacity I(W) and any rate R < I(W), the probability of block
decoding error for polar coding under successive cancellation decoding
satisfies P_e <= 2^(-N^beta) for any beta < 1/2 when the block-length
N is large enough.
Comment: Some minor corrections
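The decay of the error probability can be observed for the binary erasure channel, where the Bhattacharyya parameters are exact erasure probabilities and the sum of the N*R smallest of them is a union bound on the block error probability under successive cancellation decoding. This is my own sketch with illustrative parameters, not the paper's (analytical) argument; it just shows the bound falling as the block-length grows.

```python
def sc_error_bound(eps, n, rate):
    """Union bound on SC block error for a rate-`rate` polar code on BEC(eps)."""
    zs = [eps]
    for _ in range(n):
        # Polar transform for the BEC: Z- = 2Z - Z^2, Z+ = Z^2
        zs = [w for z in zs for w in (2 * z - z * z, z * z)]
    k = int(rate * len(zs))  # number of information channels, N * R
    return sum(sorted(zs)[:k])

eps, rate = 0.5, 0.25
ns = (4, 6, 8, 10, 12)
pes = [sc_error_bound(eps, n, rate) for n in ns]
for n, pe in zip(ns, pes):
    print(f"N = {2**n:5d}: union bound on P_e = {pe:.3e}")
```

The bound decreases steadily with N; the theorem above says the true decay is roughly 2^(-sqrt(N)) for large block-lengths, far faster than the O(N^{-1/4}) bound of the original polar coding paper.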
Trellis coding for high signal-to-noise ratio Gaussian noise channels
It is known that under energy constraints it is best to have each code word of a code satisfy the constraint with equality, rather than have the constraint satisfied only in an average sense over all code words. This suggests the use of fixed-composition codes on additive Gaussian noise channels, for which the coding gains achievable by this method are significant, especially in the high signal-to-noise-ratio case. The author examines the possibility of achieving these gains by using fixed-composition trellis codes. Shell-constrained trellis codes are promising in this regard, since they can be decoded by sequential decoding at least at rates below the computational cutoff rate.
Inequality on guessing and its application to sequential decoding
Let (X,Y) be a pair of discrete random variables with X taking values from a finite set. Suppose the value of X is to be determined, given the value of Y, by asking questions of the form 'Is X equal to x?' until the answer is 'Yes.' Let G(x|y) denote the number of guesses in any such guessing scheme when X = x, Y = y. The main result is a tight lower bound on nonnegative moments of G(X|Y). As an application, lower bounds are given on the moments of computation in sequential decoding. In particular, a simple derivation of the cutoff rate bound for single-user channels is obtained, and the previously unknown cutoff rate region of multi-access channels is determined.
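The no-side-information special case of the moment bound can be checked numerically. The sketch below assumes the bound's standard form E[G(X)^rho] >= (1 + ln M)^(-rho) * (sum_x P(x)^(1/(1+rho)))^(1+rho), where M is the alphabet size; the optimal guesser asks in decreasing order of probability. This is my own illustration with an arbitrary random distribution, not code from the paper.

```python
import math
import random

def expected_guesses(p):
    """E[G(X)] for the optimal guesser, which guesses in decreasing-probability order."""
    return sum(i * pi for i, pi in enumerate(sorted(p, reverse=True), start=1))

def lower_bound(p, rho=1.0):
    # Moment lower bound on guessing (assumed form, see lead-in):
    # E[G^rho] >= (1 + ln M)^(-rho) * (sum_x p(x)^(1/(1+rho)))^(1+rho)
    M = len(p)
    return (1 + math.log(M)) ** (-rho) * \
        sum(pi ** (1 / (1 + rho)) for pi in p) ** (1 + rho)

random.seed(1)
w = [random.random() for _ in range(64)]
p = [x / sum(w) for x in w]  # a random probability distribution on 64 letters
print("E[G(X)] =", expected_guesses(p), " lower bound =", lower_bound(p))
```

For a uniform distribution on M letters the optimal guesser needs (M + 1)/2 guesses on average, while the bound gives M/(1 + ln M): the bound is loose only by the (1 + ln M) factor, which is what makes it tight enough for the cutoff-rate applications in the abstract.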
Markov modulated periodic arrival process offered to an ATM multiplexer
When a superposition of on/off sources is offered to a deterministic server, a particular queueing system arises whose analysis has a significant role in ATM based networks. Periodic cell generation during active times is a major feature of these sources. In this paper a new analytical method is provided to solve for this queueing system via an approximation to the transient behavior of the nD/D/1 queue. The solution to the queue length distribution is given in terms of a solution to a linear differential equation with variable coefficients. The technique proposed here has close similarities with the fluid flow approximations and is amenable to extension for more complicated queueing systems with such correlated arrival processes. A numerical example for a packetized voice multiplexer is finally given to demonstrate our results.
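The nD/D/1 setting can be pictured with a toy slotted simulation: n sources each emit one cell per frame of D slots at a random phase, and the server removes one cell per slot. The paper's method is analytical; the sketch below is purely illustrative, and all names and parameter values are mine.

```python
import random

def simulate_ndd1(n, D, frames, seed=0):
    """Slot-by-slot queue lengths for n periodic sources into a rate-1 server."""
    rng = random.Random(seed)
    phases = [rng.randrange(D) for _ in range(n)]          # one cell per frame per source
    arrivals_per_slot = [phases.count(t) for t in range(D)]
    q, history = 0, []
    for _ in range(frames):
        for t in range(D):
            q = max(q + arrivals_per_slot[t] - 1, 0)        # one cell served per slot
            history.append(q)
    return history

hist = simulate_ndd1(n=20, D=24, frames=50)
print("max queue:", max(hist), " mean queue:", sum(hist) / len(hist))
```

With load n/D < 1 the queue empties every frame and its length never exceeds n - 1 (the worst case is all phases coinciding in one slot), which is the bounded, strongly correlated arrival behavior the transient nD/D/1 analysis captures.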
Joint source-channel coding and guessing
We consider the joint source-channel guessing problem, define measures of optimum performance, and give single-letter characterizations. As an application, sequential decoding is considered.